A convex polynomial that is not sos-convex
A multivariate polynomial p(x) is sos-convex if its Hessian H(x)
can be factored as H(x) = M(x)^T M(x) with a possibly nonsquare
polynomial matrix M(x). It is easy to see that sos-convexity is a sufficient
condition for convexity of p(x). Moreover, the problem of deciding
sos-convexity of a polynomial can be cast as the feasibility of a semidefinite
program, which can be solved efficiently. Motivated by this computational
tractability, it has been recently speculated whether sos-convexity is also a
necessary condition for convexity of polynomials. In this paper, we give a
negative answer to this question by presenting an explicit example of a
trivariate homogeneous polynomial of degree eight that is convex but not
sos-convex. Interestingly, our example is found with software using sum of
squares programming techniques and the duality theory of semidefinite
optimization. As a byproduct of our numerical procedure, we obtain a simple
method for searching over a restricted family of nonnegative polynomials that
are not sums of squares.
Comment: 15 pages
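The sum-of-squares certificates underlying this abstract reduce to linear algebra once a Gram matrix is in hand. The following minimal sketch (our own toy instance, not the paper's counterexample) certifies that p(x) = x^4 + 4x^3 + 6x^2 + 4x + 1 is a sum of squares by exhibiting a positive semidefinite Gram matrix Q with p(x) = z^T Q z for the monomial vector z = [1, x, x^2]:

```python
import numpy as np

# Candidate Gram matrix for p(x) = x^4 + 4x^3 + 6x^2 + 4x + 1
# in the basis z(x) = [1, x, x^2]; here Q = v v^T with v = [1, 2, 1].
Q = np.array([[1.0, 2.0, 1.0],
              [2.0, 4.0, 2.0],
              [1.0, 2.0, 1.0]])

# Q is positive semidefinite: all eigenvalues nonnegative (up to round-off).
eigvals = np.linalg.eigvalsh(Q)
assert eigvals.min() > -1e-9

def p(x):
    return x**4 + 4*x**3 + 6*x**2 + 4*x + 1

# Verify the Gram identity p(x) = z(x)^T Q z(x) at sample points.
for xv in np.linspace(-3.0, 3.0, 13):
    zv = np.array([1.0, xv, xv**2])
    assert abs(zv @ Q @ zv - p(xv)) < 1e-8

# An eigendecomposition of Q recovers explicit squares:
# p(x) = sum_i lam_i * (u_i . z)^2 with lam_i >= 0.
lam, U = np.linalg.eigh(Q)
x = 0.7
z = np.array([1.0, x, x**2])
val = sum(l * (U[:, i] @ z)**2 for i, l in enumerate(lam) if l > 1e-12)
assert abs(val - p(x)) < 1e-8
print("p is SOS; min Gram eigenvalue =", round(eigvals.min(), 9))
```

In a full-scale search, Q is a decision variable constrained by the same linear coefficient-matching equations plus Q ⪰ 0, which is exactly the semidefinite feasibility program the abstract refers to; sos-convexity imposes this structure on the Hessian rather than on p itself.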
Sums of hermitian squares and the BMV conjecture
Recently Lieb and Seiringer showed that the Bessis-Moussa-Villani conjecture
from quantum physics can be restated in the following purely algebraic way: The
sum of all words in two positive semidefinite matrices where the number of each
of the two letters is fixed is always a matrix with nonnegative trace. We show
that this statement holds if the words are of length at most 13. This has
previously been known only up to length 7. In our proof, we establish a
connection to sums of hermitian squares of polynomials in noncommuting
variables and to semidefinite programming. As a by-product we obtain an example
of a real polynomial in two noncommuting variables having nonnegative trace on
all symmetric matrices of the same size, yet not being a sum of hermitian
squares and commutators.
Comment: 21 pages; minor changes; a companion Mathematica notebook is now
available in the source file
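The algebraic restatement quoted above is easy to probe numerically. The sketch below (an illustration, not the paper's proof technique) forms S_{p,q}(A, B), the sum of all words in A and B containing p letters A and q letters B, and checks that its trace is nonnegative for random positive semidefinite matrices:

```python
import itertools
import numpy as np

def word_sum(A, B, p, q):
    """Sum of all distinct words with p copies of A and q copies of B."""
    n = A.shape[0]
    total = np.zeros((n, n))
    letters = ('A',) * p + ('B',) * q
    for word in set(itertools.permutations(letters)):
        M = np.eye(n)
        for c in word:
            M = M @ (A if c == 'A' else B)
        total += M
    return total

rng = np.random.default_rng(0)
for _ in range(20):
    R = rng.standard_normal((3, 3))
    S = rng.standard_normal((3, 3))
    A, B = R @ R.T, S @ S.T            # random positive semidefinite matrices
    tr = np.trace(word_sum(A, B, 2, 3))  # word length 5 <= 13, covered above
    assert tr >= -1e-9
print("all sampled traces nonnegative")
```

Random sampling of course only falsifies; the abstract's contribution is a certificate, via sums of hermitian squares and semidefinite programming, that no counterexample exists up to word length 13.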
The matricial relaxation of a linear matrix inequality
Given linear matrix inequalities (LMIs) L_1 and L_2, it is natural to ask:
(Q1) when does one dominate the other, that is, does L_1(X) PsD imply L_2(X)
PsD? (Q2) when do they have the same solution set? Such questions can be
NP-hard. This paper describes a natural relaxation of an LMI, based on
substituting matrices for the variables x_j. With this relaxation, the
domination questions (Q1) and (Q2) have elegant answers, indeed reduce to
constructible semidefinite programs. Assume there is an X such that L_1(X) and
L_2(X) are both PD, and suppose the positivity domain of L_1 is bounded. For
our "matrix variable" relaxation a positive answer to (Q1) is equivalent to the
existence of matrices V_j such that L_2(x)=V_1^* L_1(x) V_1 + ... + V_k^*
L_1(x) V_k. As for (Q2) we show that, up to redundancy, L_1 and L_2 are
unitarily equivalent.
Such algebraic certificates are typically called Positivstellensaetze and the
above are examples of such for linear polynomials. The paper goes on to derive
a cleaner and more powerful Putinar-type Positivstellensatz for polynomials
positive on a bounded set of the form {X | L(X) PsD}.
An observation at the core of the paper is that the relaxed LMI domination
problem is equivalent to a classical problem. Namely, the problem of
determining if a linear map from a subspace of matrices to a matrix algebra is
"completely positive".Comment: v1: 34 pages, v2: 41 pages; supplementary material is available in
the source file, or see http://srag.fmf.uni-lj.si
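The evaluations at the heart of these questions can be sketched in a few lines. Below is a toy pencil of our own (not one of the paper's examples): the scalar LMI L(x) = A0 + x1*A1 + x2*A2 ⪰ 0 cuts out the unit disk, and the "matrix variable" relaxation substitutes symmetric matrices X_j for the scalars x_j via Kronecker products:

```python
import numpy as np

# Toy LMI pencil whose scalar solution set is the unit disk x1^2 + x2^2 <= 1.
A0 = np.eye(2)
A1 = np.diag([-1.0, 1.0])
A2 = np.array([[0.0, 1.0], [1.0, 0.0]])

def L_scalar(x1, x2):
    return A0 + x1 * A1 + x2 * A2

def L_matrix(X1, X2):
    """Matricial evaluation: substitute symmetric matrices for the scalars."""
    n = X1.shape[0]
    return np.kron(A0, np.eye(n)) + np.kron(A1, X1) + np.kron(A2, X2)

def is_psd(M, tol=1e-9):
    return np.linalg.eigvalsh(M).min() >= -tol

# Scalar feasibility: inside vs. outside the disk.
assert is_psd(L_scalar(0.6, 0.8)) and not is_psd(L_scalar(0.9, 0.9))

# Matrix substitution: commuting X_j whose joint eigenvalue pairs lie in the
# disk remain feasible; this larger domain is what the relaxed domination
# questions (Q1) and (Q2) quantify over.
X1 = np.diag([0.3, -0.5])
X2 = np.diag([0.4, 0.2])
assert is_psd(L_matrix(X1, X2))
print("scalar and matricial feasibility checks passed")
```

The paper's point is that quantifying (Q1) over all such matrix tuples, rather than over scalars, turns an NP-hard question into a constructible semidefinite program.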
Generating Non-Linear Interpolants by Semidefinite Programming
Interpolation-based techniques have been widely and successfully applied in
the verification of hardware and software, e.g., in bounded model checking,
CEGAR, and SMT, where the hardest part is synthesizing interpolants. Various
approaches for discovering interpolants for propositional logic, quantifier-free
fragments of first-order theories, and their combinations have been proposed.
However, little work focuses on discovering polynomial interpolants in the
literature. In this paper, we provide an approach for constructing non-linear
interpolants based on semidefinite programming, and show how to apply such
results to the verification of programs by examples.
Comment: 22 pages, 4 figures
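The cheap half of this pipeline, checking that a candidate polynomial separates two semialgebraic sets, is easy to sketch; the synthesis itself is the semidefinite program and is not shown. In this hand-made example (ours, not the paper's), A = {(x, y) : y ≥ x²} and B = {(x, y) : y ≤ -x² - 1}, and the quadratic I(x, y) = y + x² + 0.5 is a nonlinear interpolant, positive on A and negative on B:

```python
import numpy as np

# Candidate nonlinear (quadratic) interpolant for the sets
#   A = {y >= x^2}   and   B = {y <= -x^2 - 1}.
def I(x, y):
    return y + x**2 + 0.5

# Sample the box [-2, 2]^2 and keep the points landing in A and in B.
rng = np.random.default_rng(0)
pts = rng.uniform(-2, 2, size=(20000, 2))
on_A = pts[pts[:, 1] >= pts[:, 0]**2]
on_B = pts[pts[:, 1] <= -pts[:, 0]**2 - 1]
assert len(on_A) > 0 and len(on_B) > 0

assert np.all(I(on_A[:, 0], on_A[:, 1]) > 0)   # I positive on all of A
assert np.all(I(on_B[:, 0], on_B[:, 1]) < 0)   # I negative on all of B
print("candidate quadratic interpolant separates the sampled sets")
```

Here the separation is in fact provable by hand (on A, I ≥ 2x² + 0.5 > 0; on B, I ≤ -0.5 < 0); the paper's contribution is to produce such certificates automatically via semidefinite programming.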
Certification of Bounds of Non-linear Functions: the Templates Method
The aim of this work is to certify lower bounds for real-valued multivariate
functions, defined by semialgebraic or transcendental expressions. The
certificate must be, eventually, formally provable in a proof system such as
Coq. The application range for such a tool is widespread; for instance Hales'
proof of Kepler's conjecture yields thousands of inequalities. We introduce an
approximation algorithm, which combines ideas of the max-plus basis method (in
optimal control) and of the linear templates method developed by Manna et al.
(in static analysis). This algorithm consists in bounding some of the
constituents of the function by suprema of quadratic forms with a well-chosen
curvature. This leads to semialgebraic optimization problems, solved by
sum-of-squares relaxations. Templates limit the blow up of these relaxations at
the price of coarsening the approximation. We illustrate the efficiency of our
framework with various examples from the literature and discuss the interfacing
with Coq.
Comment: 16 pages, 3 figures, 2 tables
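A one-dimensional toy instance of the templates idea (our illustration, with curvature chosen by hand rather than by the max-plus machinery): since |sin''| ≤ 1 and sin ≥ 0 on [0, π], Taylor's theorem with remainder gives, for any anchor a in [0, π], the certified quadratic lower bound sin(x) ≥ sin(a) + cos(a)(x - a) - ½(x - a)², so the transcendental constituent is replaced by semialgebraic data:

```python
import numpy as np

# Quadratic template with curvature 1/2, valid below sin on [0, pi]
# because the Taylor remainder -sin(xi)/2 * (x - a)^2 satisfies
# sin(xi) in [0, 1] for xi in [0, pi].
def quad_lower(x, a):
    return np.sin(a) + np.cos(a) * (x - a) - 0.5 * (x - a) ** 2

xs = np.linspace(0.0, np.pi, 2001)
anchors = (0.5, 1.5, 2.5)              # a few max-plus style anchor points
for a in anchors:
    assert np.all(np.sin(xs) - quad_lower(xs, a) >= -1e-12)

# The pointwise max over anchors is a tighter certified lower bound;
# adding anchors shrinks the gap at the price of a larger relaxation.
bound = np.max([quad_lower(xs, a) for a in anchors], axis=0)
gap = np.max(np.sin(xs) - bound)
print("max gap between sin and the template bound on [0, pi]:", round(gap, 4))
```

The trade-off visible here is the one the abstract describes: more template anchors tighten the approximation but enlarge the resulting sum-of-squares relaxations.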
An Optimization Approach to Weak Approximation of Lévy-Driven Stochastic Differential Equations
We propose an optimization approach to weak approximation of Lévy-driven stochastic differential equations. We employ a mathematical programming framework to numerically obtain upper and lower bound estimates of the target expectation, where the optimization procedure ends up with a polynomial programming problem. An advantage of our approach is that all we need is a closed form of the Lévy measure, not exact simulation knowledge of the increments or a shot noise representation for the time discretization approximation. We also investigate methods for approximation at several intermediate time points simultaneously.
NP-hardness of Deciding Convexity of Quartic Polynomials and Related Problems
We show that unless P=NP, there exists no polynomial time (or even
pseudo-polynomial time) algorithm that can decide whether a multivariate
polynomial of degree four (or higher even degree) is globally convex. This
solves a problem that has been open since 1992 when N. Z. Shor asked for the
complexity of deciding convexity for quartic polynomials. We also prove that
deciding strict convexity, strong convexity, quasiconvexity, and
pseudoconvexity of polynomials of even degree four or higher is strongly
NP-hard. By contrast, we show that quasiconvexity and pseudoconvexity of odd
degree polynomials can be decided in polynomial time.
Comment: 20 pages
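Given this hardness result, practical convexity tests for quartics are necessarily incomplete: either sufficient conditions such as sos-convexity, or necessary conditions such as the Hessian-sampling heuristic sketched below (our illustration; a PSD Hessian at every sample refutes nothing, while a single indefinite sample is a witness of nonconvexity):

```python
import numpy as np

def hessian_psd_on_samples(hess, rng, trials=1000, tol=1e-9):
    """Necessary-condition check: PSD Hessian at random sample points."""
    for _ in range(trials):
        x = rng.uniform(-2, 2, size=2)
        if np.linalg.eigvalsh(hess(x)).min() < -tol:
            return False        # explicit witness of nonconvexity
    return True                 # inconclusive: only a necessary condition

# Hessians of two quartics:
#   p1(x, y) = x^4 + x^2 y^2 + y^4      (convex)
#   p2(x, y) = x^4 - 3 x^2 y^2 + y^4    (nonconvex)
def H1(v):
    x, y = v
    return np.array([[12*x**2 + 2*y**2, 4*x*y],
                     [4*x*y, 2*x**2 + 12*y**2]])

def H2(v):
    x, y = v
    return np.array([[12*x**2 - 6*y**2, -12*x*y],
                     [-12*x*y, 12*y**2 - 6*x**2]])

rng = np.random.default_rng(1)
assert hessian_psd_on_samples(H1, rng)
assert not hessian_psd_on_samples(H2, rng)
print("sampling refutes convexity of p2 but is only inconclusive for p1")
```

The abstract's NP-hardness theorem is exactly the statement that no polynomial-time procedure can upgrade such a heuristic into a complete decision method (unless P = NP).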
A Polynomial Optimization Approach to Constant Rebalanced Portfolio Selection
We address the multi-period portfolio optimization problem with the constant rebalancing strategy. This problem is formulated as a polynomial optimization problem (POP) by using a mean-variance criterion. In order to solve POPs of high degree, we develop a cutting-plane algorithm based on semidefinite programming. Our algorithm can solve problems that cannot be handled by any known polynomial optimization solver.
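The reason constant rebalancing yields a polynomial program is worth making concrete: with fixed weights w rebalanced each period, terminal wealth is W(w) = Π_t (w · r_t), a polynomial of degree T in w. The sketch below (hypothetical return data; a crude grid search stands in for the paper's cutting-plane SDP) evaluates this objective:

```python
import numpy as np

# Hypothetical gross returns for 2 assets over T = 3 periods.
returns = np.array([[1.05, 0.98],
                    [0.97, 1.06],
                    [1.02, 1.01]])

def terminal_wealth(w):
    # W(w) = prod_t (w . r_t): a degree-T polynomial in the weights w,
    # which is what makes the problem a polynomial optimization problem.
    return np.prod(returns @ w)

w = np.array([0.5, 0.5])              # a feasible constant-mix portfolio
# Grid search over the 2-asset simplex as a stand-in for the real solver.
grid = np.linspace(0, 1, 101)
best = max(grid, key=lambda a: terminal_wealth(np.array([a, 1 - a])))
assert terminal_wealth(np.array([best, 1 - best])) >= terminal_wealth(w)
print("best constant mix on grid:", round(best, 2))
```

With many assets and periods the grid explodes and the polynomial degree grows with T, which is where the high-degree POP machinery of the abstract becomes necessary.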
Nonlinear Integer Programming
Research efforts of the past fifty years have led to a development of linear
integer programming as a mature discipline of mathematical optimization. Such a
level of maturity has not been reached when one considers nonlinear systems
subject to integrality requirements for the variables. This chapter is
dedicated to this topic.
The primary goal is a study of a simple version of general nonlinear integer
problems, where all constraints are still linear. Our focus is on the
computational complexity of the problem, which varies significantly with the
type of nonlinear objective function in combination with the underlying
combinatorial structure. Numerous boundary cases of complexity emerge, which
sometimes surprisingly lead even to polynomial time algorithms.
We also cover recent successful approaches for more general classes of
problems. Though no positive theoretical efficiency results are available, nor
are they likely to ever be available, these seem to be the currently most
successful and interesting approaches for solving practical problems.
It is our belief that the study of algorithms motivated by theoretical
considerations and those motivated by our desire to solve practical instances
should and do inform one another. So it is with this viewpoint that we present
the subject, and it is in this direction that we hope to spark further
research.
Comment: 57 pages. To appear in: M. Jünger, T. Liebling, D. Naddef, G.
Nemhauser, W. Pulleyblank, G. Reinelt, G. Rinaldi, and L. Wolsey (eds.), 50
Years of Integer Programming 1958--2008: The Early Years and State-of-the-Art
Surveys, Springer-Verlag, 2009, ISBN 354068274
The Convex Geometry of Linear Inverse Problems
In applications throughout science and engineering one is often faced with
the challenge of solving an ill-posed inverse problem, where the number of
available measurements is smaller than the dimension of the model to be
estimated. However in many practical situations of interest, models are
constrained structurally so that they only have a few degrees of freedom
relative to their ambient dimension. This paper provides a general framework to
convert notions of simplicity into convex penalty functions, resulting in
convex optimization solutions to linear, underdetermined inverse problems. The
class of simple models considered are those formed as the sum of a few atoms
from some (possibly infinite) elementary atomic set; examples include
well-studied cases such as sparse vectors and low-rank matrices, as well as
several others including sums of a few permutation matrices, low-rank tensors,
orthogonal matrices, and atomic measures. The convex programming formulation is
based on minimizing the norm induced by the convex hull of the atomic set; this
norm is referred to as the atomic norm. The facial structure of the atomic norm
ball carries a number of favorable properties that are useful for recovering
simple models, and an analysis of the underlying convex geometry provides sharp
estimates of the number of generic measurements required for exact and robust
recovery of models from partial information. These estimates are based on
computing the Gaussian widths of tangent cones to the atomic norm ball. When
the atomic set has algebraic structure the resulting optimization problems can
be solved or approximated via semidefinite programming. The quality of these
approximations affects the number of measurements required for recovery. Thus
this work extends the catalog of simple models that can be recovered from
limited linear information via tractable convex programming.
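A concrete instance of the atomic-norm construction: for the atomic set A = {±e_i} of signed standard basis vectors, the atomic norm is the gauge of conv(A), i.e., the minimum total mass Σ c_a needed to write x = Σ c_a·a with c_a ≥ 0, and it should reproduce the ℓ1 norm exactly. The sketch below computes this gauge as a small linear program (assuming SciPy's `linprog` is available):

```python
import numpy as np
from scipy.optimize import linprog

def atomic_norm_sparse(x):
    """Atomic norm of x over the atoms {+e_i, -e_i}, computed as an LP."""
    n = len(x)
    # Nonnegative weights on the 2n atoms; the constraint x = sum c_a * a
    # reads [I, -I] c = x, and we minimize the total mass 1^T c.
    A_eq = np.hstack([np.eye(n), -np.eye(n)])
    res = linprog(c=np.ones(2 * n), A_eq=A_eq, b_eq=x,
                  bounds=[(0, None)] * (2 * n), method="highs")
    return res.fun

x = np.array([1.5, 0.0, -2.0, 0.25])
assert abs(atomic_norm_sparse(x) - np.abs(x).sum()) < 1e-8
print("atomic norm of x over signed basis atoms:", atomic_norm_sparse(x))
```

Swapping the atomic set changes the induced norm (unit-norm rank-one matrices give the nuclear norm, for instance), which is exactly the generality the abstract exploits to cover sparse vectors, low-rank matrices, and the other model classes listed above.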